The highest level of Endsley's situation awareness model, called projection, concerns predicting the status of elements in the environment in the near future. In cyber situational awareness, projection for an Advanced Persistent Threat (APT) requires predicting the APT's next step. Threats are constantly evolving and becoming more sophisticated. Because supervised and unsupervised learning methods require APT datasets to project an APT's next step, they cannot recognize unknown APT threats. In reinforcement learning, an agent interacts with the environment, so it can potentially project the next step of both known and unknown APTs. To date, reinforcement learning has not been used to project the next step of APTs. In reinforcement learning, the agent uses previous states and actions to approximate the best action for the current state. When the number of states and actions is large, the agent employs a neural network to approximate the best action in each state, an approach known as deep reinforcement learning. In this paper, we propose a deep reinforcement learning system to predict the next step of an APT. Since there are dependencies between attack steps, we employ the Long Short-Term Memory (LSTM) method to approximate the best action for each state. In our proposed system, we project the next step of an APT threat based on the current situation.
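As a rough sketch of this idea (not the paper's actual implementation; the module names, dimensions, and action encoding below are all assumptions), an LSTM can score candidate next attack steps from the sequence of states observed so far:

```python
# Minimal sketch: LSTM-based Q-value approximation over a sequence of
# observed APT attack states. Hypothetical dimensions and names; this
# illustrates the idea, not the paper's implementation.
import torch
import torch.nn as nn

class LSTMQNetwork(nn.Module):
    def __init__(self, state_dim=32, hidden_dim=64, num_actions=10):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, state_seq):
        # state_seq: (batch, seq_len, state_dim) -- past attack-step observations
        out, _ = self.lstm(state_seq)
        # Score each candidate next step from the last hidden state
        return self.head(out[:, -1, :])   # (batch, num_actions)

# Usage: the highest-valued action is the projected next APT step.
q_net = LSTMQNetwork()
seq = torch.randn(1, 5, 32)              # five observed steps (synthetic)
projected_step = q_net(seq).argmax(dim=-1)
```

In a full deep reinforcement learning setup, such a network would be trained with a temporal-difference objective (e.g., Q-learning) against rewards obtained from interaction with the environment.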
Recently, Smart Video Surveillance (SVS) systems have been receiving increased attention from scholars and developers as a substitute for current passive surveillance systems. These systems are used to make policing and monitoring more efficient and to improve public safety. However, the nature of these systems, which monitor the public's daily activities, raises various ethical challenges. There are different approaches to addressing privacy issues when implementing SVS. In this paper, we focus on the role of design in addressing the ethical and privacy challenges of SVS. Reviewing four privacy protection regulations to generate an overview of best practices for privacy protection, we argue that ethical and privacy concerns can be addressed through four lenses: algorithm, system, model, and data. As a case study, we describe our proposed system and illustrate how it can serve as a baseline for designing a privacy-preserving system that delivers safety to society. We use several Artificial Intelligence algorithms, such as object detection, single- and multi-camera re-identification, action recognition, and anomaly detection, to provide a basic functional system. We also use cloud-native services to implement a smartphone application that delivers the outputs to end users.
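As an illustration of the design point, the following skeleton shows one way such a modular pipeline could be wired so that raw frames never leave the processing stage; all class and parameter names here are hypothetical, not the system's actual API:

```python
# Illustrative skeleton of a privacy-aware SVS pipeline. Each argument is
# a placeholder for a real model (detector, re-ID network, etc.). Only
# anonymized metadata leaves this stage; raw frames are never persisted,
# which is one way to address the privacy lens on data.
from dataclasses import dataclass

@dataclass
class FrameMetadata:
    boxes: list          # detected person bounding boxes
    track_ids: list      # per-camera / cross-camera identities
    actions: list        # recognized actions per person
    anomaly_score: float

def process_frame(frame, detector, reid, action_model, anomaly_model):
    boxes = detector(frame)
    track_ids = reid(frame, boxes)
    actions = action_model(frame, boxes)
    score = anomaly_model(boxes, track_ids, actions)
    # The raw frame is dropped here; downstream services (e.g., the
    # smartphone app) receive only this metadata record.
    return FrameMetadata(boxes, track_ids, actions, score)
```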
A recent explosion of research focuses on developing methods and tools for building fair predictive models. However, most of this work relies on the assumption that the training and testing data are representative of the target population on which the model will be deployed. In practice, real-world training data often suffer from selection bias and are not representative of the target population, for many reasons including the cost and feasibility of collecting and labeling data, historical discrimination, and individual biases. In this paper, we introduce a new framework for certifying and ensuring the fairness of predictive models trained on biased data. We take inspiration from query answering over incomplete and inconsistent databases to present and formalize the problem of consistent range approximation (CRA) of answers to queries about aggregate information for the target population. We leverage background knowledge about the data collection process, the biased data, and limited or no auxiliary data sources to compute a range of answers for aggregate queries over the target population that are consistent with the available information. We then develop methods that use CRA of such aggregate queries to build predictive models that are certifiably fair on the target population, even when no external information about that population is available during training. We evaluate our methods on real data and demonstrate improvements over the state of the art. Significantly, we show that enforcing fairness using our methods can lead to predictive models that are not only fair, but also more accurate on the target population.
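To make the flavor of consistent range approximation concrete, here is a toy sketch (our own illustrative example, not the paper's algorithm): given a biased sample and only an assumed cap on the number of unobserved target-population rows, one can bound a group's true positive rate by considering the extreme labelings of the missing rows.

```python
# Toy CRA-style sketch: bound an aggregate (a group's rate of positive
# labels) over the *target* population using only the biased sample plus
# an assumed cap on the number of missing rows.
def positive_rate_range(pos_observed, n_observed, max_missing):
    """Return (low, high) for the true positive rate, consistent with
    every possible labeling of the missing rows."""
    # All missing rows negative -> lowest possible rate
    low = pos_observed / (n_observed + max_missing)
    # All missing rows positive -> highest possible rate
    high = (pos_observed + max_missing) / (n_observed + max_missing)
    return low, high

# Example: 40 positives among 100 sampled rows, up to 50 unsampled rows.
print(positive_rate_range(40, 100, 50))  # (0.266..., 0.6)
```

Any fairness guarantee that holds for every value inside such a range holds for the target population, which is the intuition behind certifying fairness from biased data.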
In recent years, we have seen significant interest in data-driven deep learning approaches for video anomaly detection, where an algorithm must determine whether specific frames of a video contain abnormal behaviors. However, video anomaly detection is particularly context-specific, and the limited availability of representative datasets heavily constrains real-world accuracy. Additionally, the metrics currently reported by most state-of-the-art methods often do not reflect how well the model will perform in real-world scenarios. In this article, we present the Charlotte Anomaly Dataset (CHAD). CHAD is a high-resolution, multi-camera anomaly dataset in a commercial parking lot setting. In addition to frame-level anomaly labels, CHAD is the first anomaly dataset to include bounding box, identity, and pose annotations for each actor. This is especially beneficial for skeleton-based anomaly detection, which is attractive for its lower computational demand in real-world settings. CHAD is also the first anomaly dataset to contain multiple views of the same scene. With four camera views and over 1.15 million frames, CHAD is the largest fully annotated anomaly detection dataset with person annotations collected from continuous video streams from stationary cameras for smart video surveillance applications. To demonstrate the efficacy of CHAD for training and evaluation, we benchmark two state-of-the-art skeleton-based anomaly detection algorithms on CHAD and provide a comprehensive analysis, including both quantitative results and qualitative examination.
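For reference, a typical frame-level benchmark on a dataset like CHAD reduces to computing AUC-ROC over per-frame anomaly scores; the sketch below uses synthetic scores and labels purely for illustration.

```python
# Sketch of a standard frame-level evaluation for video anomaly detection:
# AUC-ROC over per-frame scores. Scores and labels are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

frame_labels = np.array([0, 0, 1, 1, 0, 1])              # 1 = anomalous frame
frame_scores = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9])  # model outputs

print("frame-level AUC-ROC:", roc_auc_score(frame_labels, frame_scores))
```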
Gaussian Mixture Models (GMMs) are among the most potent parametric density estimators based on the kernel model and find application in many scientific domains. In recent years, with the dramatic enlargement of data sources, typical machine learning algorithms, e.g. Expectation Maximization (EM), encounter difficulty with high-dimensional and streaming data. Moreover, complicated densities often demand a large number of Gaussian components. This paper proposes a fast online parameter estimation algorithm for GMMs using first-order stochastic optimization. The approach provides a framework for coping with the challenges GMMs face with high-dimensional streaming data and complex densities by leveraging a flexibly-tied factorization of the covariance matrix. A new stochastic manifold optimization algorithm that preserves orthogonality is introduced and used alongside well-known Euclidean-space numerical optimization. Numerous empirical results on both synthetic and real datasets justify the effectiveness of our proposed stochastic method over EM-based methods, in terms of a better-converged maximum of the likelihood function, fewer epochs needed for convergence, and less time per epoch.
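A minimal sketch of this first-order, mini-batch view of GMM fitting follows; as a simplification, diagonal covariances and plain SGD stand in for the paper's flexibly-tied factorization and orthogonality-preserving manifold updates.

```python
# Minimal sketch: online GMM fitting via first-order stochastic
# optimization of the negative log-likelihood. Diagonal covariances and
# SGD are simplifications; the streaming update structure is the point.
import torch

K, D = 3, 2                                    # components, dimensions
logits = torch.zeros(K, requires_grad=True)    # mixture weights (softmax)
means = torch.randn(K, D, requires_grad=True)
log_std = torch.zeros(K, D, requires_grad=True)
opt = torch.optim.SGD([logits, means, log_std], lr=0.05)

def neg_log_likelihood(x):                     # x: (batch, D) mini-batch
    mix = torch.distributions.Categorical(logits=logits)
    comp = torch.distributions.Independent(
        torch.distributions.Normal(means, log_std.exp()), 1)
    gmm = torch.distributions.MixtureSameFamily(mix, comp)
    return -gmm.log_prob(x).mean()

for step in range(100):                        # one pass over a stream
    batch = torch.randn(32, D)                 # stand-in for streaming data
    opt.zero_grad()
    loss = neg_log_likelihood(batch)
    loss.backward()
    opt.step()
```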
In atomistic simulations of solids, the ability to classify crystal phases and lattice defects in the presence of thermal fluctuations is essential for gaining deeper insights into the simulated dynamics. The need for accurate and efficient characterization methods is especially acute in presently emerging large-scale simulations of multi-phase systems far from equilibrium. Taking the perspective that delineating order and disorder features from ubiquitous thermal vibrations is akin to extracting signal from noise, we consider classification of ordered phases and identification of disordered crystal defects to be fundamentally the same problem and address both with a unified approach: a denoising score function that removes thermal noise and recovers any underlying crystalline order or disorder. Built on a rotationally equivariant graph neural network (NequIP), the denoiser was trained entirely on synthetically noised structures and requires no simulation data during training. To demonstrate its denoising capabilities, the denoiser is shown to effectively remove thermal vibrations from BCC, FCC, and HCP crystal structures without affecting the underlying disordered defects, including point defects, dislocations, grain boundaries, and liquid disorder. In particular, the denoiser was applied to two relatively complex MD simulations that present practical challenges: a Cu solidification trajectory involving a polymorphic nucleus, and a trajectory of BCC Ta undergoing plastic deformation resulting in dislocation networks and point defect clusters. In both cases, the denoiser facilitates or trivializes the subsequent characterization of the order-disorder features. Lastly, we outline future work to extend our denoising model to more complex crystal structures and to multi-element systems.
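Conceptually, the training scheme can be sketched as follows; note this is a toy stand-in: the actual model is an equivariant graph neural network over atomic neighborhoods (NequIP), not the MLP used here, and real lattice geometries replace the random "sites".

```python
# Conceptual sketch of the synthetic-noise training objective: perturb
# ideal positions with Gaussian noise and regress the displacement back
# toward the lattice (a denoising direction). A plain MLP stands in for
# the paper's equivariant GNN purely to show the objective.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

ideal = torch.rand(256, 3)                   # stand-in for lattice sites
for step in range(200):
    noise = 0.05 * torch.randn_like(ideal)   # synthetic "thermal" noise
    pred = model(ideal + noise)              # predicted per-atom displacement
    loss = ((pred - (-noise)) ** 2).mean()   # target: step back to lattice
    opt.zero_grad(); loss.backward(); opt.step()

# At inference, noisy MD positions are denoised as positions + model(positions).
```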
People capture photos and videos to relive and share memories of personal significance. Recently, media montages (stories) have become a popular mode of sharing these memories due to their intuitive and powerful storytelling capabilities. However, creating such montages usually involves many manual searches, clicks, and selections that are time-consuming and cumbersome, adversely affecting the user experience. To alleviate this, we propose task-oriented dialogs for montage creation as a novel interactive tool to seamlessly search, compile, and edit montages from a media collection. To the best of our knowledge, our work is the first to leverage multi-turn conversations for such a challenging application, extending the previous literature on simple media retrieval tasks. We collect a new dataset, C3 (Conversational Content Creation), comprising 10k dialogs conditioned on media montages simulated from a large media collection. We take a simulate-and-paraphrase approach to collect these dialogs in a cost- and time-efficient manner while drawing from a natural-language distribution. Our analysis and benchmarking of state-of-the-art language models showcase the multimodal challenges present in the dataset. Lastly, we present a real-world mobile demo application that shows the feasibility of the proposed work in real-world applications. Our code and data will be made publicly available.
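The simulate-and-paraphrase recipe can be illustrated with a toy sketch; the templates, filters, and paraphrase table below are invented for illustration, and in the real pipeline dialogs are simulated from montage state at scale and paraphrased for linguistic variety.

```python
# Toy simulate-and-paraphrase sketch: (1) generate a dialog turn from
# structured montage state with a template (cheap, exact), then
# (2) paraphrase it for variety. The paraphraser is a placeholder.
import random

def simulate_turn(media_filter):
    return f"show me {media_filter['type']}s from {media_filter['when']}"

def paraphrase(utterance):
    # Placeholder: swap in crowd-sourced or model-generated rewrites.
    variants = {"show me": ["can you find", "I'd like to see", "pull up"]}
    lead = random.choice(variants["show me"])
    return utterance.replace("show me", lead, 1)

turn = simulate_turn({"type": "video", "when": "last summer"})
print(paraphrase(turn))   # e.g. "pull up videos from last summer"
```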
The SNMMI Artificial Intelligence (SNMMI-AI) Summit, organized by the SNMMI AI Task Force, took place in Bethesda, MD on March 21-22, 2022. It brought together various community members and stakeholders from academia, healthcare, industry, patient representatives, and government (NIH, FDA), and considered various key themes to envision and facilitate a bright future for routine, trustworthy use of AI in nuclear medicine. In what follows, essential issues, challenges, controversies and findings emphasized in the meeting are summarized.
In this paper, we present a sample complexity bound for learning a simplex from noisy samples. A dataset of size $n$ is given, comprising i.i.d. samples drawn from a uniform distribution over an unknown arbitrary simplex in $\mathbb{R}^K$, where the samples are assumed to be corrupted by additive Gaussian noise of arbitrary magnitude. We propose a strategy that outputs a simplex within total variation distance $\epsilon + O\left(\mathrm{SNR}^{-1}\right)$ of the true simplex, for any $\epsilon > 0$. We prove that to get close to the true simplex, it suffices to have $n \ge \tilde{O}\left(K^2/\epsilon^2\right)$ samples. Here, SNR stands for the signal-to-noise ratio, which can be viewed as the ratio of the simplex diameter to the standard deviation of the noise. Our proof builds on recent advances in sample compression techniques, which have already shown promise in yielding tight bounds for density estimation of high-dimensional Gaussian mixture models.
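Restated compactly in LaTeX (our paraphrase of the guarantee above; the constants hidden by $\tilde{O}$ are not specified in the abstract, and the total variation distance is between the uniform distributions on the two simplices):

```latex
% Compact restatement of the abstract's guarantee. Given n i.i.d. samples
% from the uniform distribution on an unknown simplex in R^K, corrupted
% by additive Gaussian noise:
\[
  n \;\ge\; \tilde{O}\!\left(\frac{K^{2}}{\epsilon^{2}}\right)
  \quad\Longrightarrow\quad
  d_{\mathrm{TV}}\!\left(\widehat{\mathcal{S}},\,\mathcal{S}^{*}\right)
  \;\le\; \epsilon + O\!\left(\mathrm{SNR}^{-1}\right),
\]
% where \widehat{\mathcal{S}} is the estimated simplex, \mathcal{S}^{*}
% the true one, and SNR = diameter of S^* / standard deviation of noise.
```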
Computer vision techniques can help automate or partially automate the clinical examination of orofacial impairments to provide accurate and objective assessments. Towards the development of such automated systems, we evaluated two approaches to detecting and temporally segmenting (parsing) repetitions in orofacial assessment videos. Recorded videos of participants with amyotrophic lateral sclerosis (ALS) and healthy control (HC) individuals were obtained from the Toronto NeuroFace dataset. Two repetition detection and parsing approaches were examined: one based on peak detection in engineered features from tracked facial landmarks, namely the distance between the vermilion-cutaneous junctions of the upper and lower lips (baseline analysis), and the other using a pre-trained transformer-based deep learning model called RepNet (Dwibedi et al., 2020), which automatically detects periodicity and parses periodic and semi-periodic repetitions in video data. In experimental evaluation on two orofacial assessment tasks, repeating maximum mouth opening (OPEN) and repeating the sentence "Buy Bobby a Puppy" (BBP), RepNet provided better parsing than the landmark-based approach, as quantified by higher mean intersection over union (IoU) with respect to ground-truth manual parsing. Automated parsing using RepNet also clearly separated HC and ALS participants based on the duration of BBP repetitions, whereas the landmark-based approach did not.
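The landmark-based baseline boils down to peak detection over a 1-D lip-distance signal; here is a minimal sketch using SciPy, with a synthetic signal standing in for the extracted landmark distances (landmark tracking itself is omitted).

```python
# Sketch of the landmark-based baseline: treat the lip-to-lip distance
# over time as a 1-D signal and parse repetitions by peak detection.
# The signal below is synthetic; thresholds are illustrative.
import numpy as np
from scipy.signal import find_peaks

fps = 30
t = np.arange(0, 10, 1 / fps)                        # 10 s of video
lip_distance = np.abs(np.sin(2 * np.pi * 0.5 * t))   # ~1 opening per second

# Each peak marks one maximal mouth opening; spacing gives the repetition rate.
peaks, _ = find_peaks(lip_distance, height=0.5, distance=fps // 2)
print("repetitions:", len(peaks))
print("mean period (s):", np.diff(peaks).mean() / fps)
```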